In multi-agent reinforcement learning (MARL), independent learners are those that do not observe the actions of the other agents in the system. Due to the decentralization of information, it is challenging to design independent learners that drive play to equilibrium. This paper investigates the feasibility of using satisficing dynamics to guide independent learners to approximate equilibrium in stochastic games. For $\epsilon \geq 0$, an $\epsilon$-satisficing policy update rule is any rule that instructs an agent not to change its policy when it is $\epsilon$-best-responding to the policies of the remaining players; an $\epsilon$-satisficing path is defined as the sequence of joint policies obtained when each agent uses some $\epsilon$-satisficing policy update rule to select its next policy. We establish structural results on the existence of $\epsilon$-satisficing paths into $\epsilon$-equilibrium in symmetric $n$-player games and in general stochastic games with two players. We then present an independent learning algorithm for $n$-player symmetric games and give a high-probability guarantee of convergence to $\epsilon$-equilibrium under self-play. This guarantee is made using symmetry alone, exploiting the previously untapped structure of $\epsilon$-satisficing paths.
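To make the satisficing idea concrete, here is a minimal Python sketch of an $\epsilon$-satisficing policy update: the agent keeps its current policy whenever it is already $\epsilon$-best-responding, and otherwise is free to switch, here by sampling uniformly from its policy set. The function name, its inputs, and the uniform exploration scheme are illustrative assumptions, not the paper's algorithm.

```python
import random

def epsilon_satisficing_update(current_policy, current_value,
                               best_response_value, policy_space, epsilon):
    """One epsilon-satisficing policy update (illustrative sketch).

    If the agent's current policy is already an epsilon-best response to the
    other players' strategies, the rule keeps it; otherwise the agent may
    switch, here by sampling a new policy uniformly at random.
    """
    if current_value >= best_response_value - epsilon:
        return current_policy            # satisficed: keep the policy
    return random.choice(policy_space)   # not satisficed: explore
```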
Rearrangement puzzles are variations of rearrangement problems in which the elements of a problem are potentially logically linked together. To efficiently solve such puzzles, we develop a motion planning approach based on a new state space that is logically factored, integrating the capabilities of the robot through factors of simultaneously manipulatable joints of an object. Based on this factored state space, we propose less-actions RRT (LA-RRT), a planner which optimizes for a low number of actions to solve a puzzle. At the core of our approach lies a new path defragmentation method, which rearranges and optimizes consecutive edges to minimize action cost. We solve six rearrangement scenarios with a Fetch robot, involving planar table puzzles and an escape room scenario. LA-RRT significantly outperforms the next best asymptotically-optimal planner, improving final action cost by a factor of 4.01 to 6.58.
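As a rough illustration of the defragmentation idea under an assumed path representation, the Python sketch below fuses consecutive edges that actuate the same factor, which lowers the number of distinct manipulation actions; LA-RRT's actual method also rearranges edges before merging, which this sketch omits.

```python
def defragment(path):
    """Merge consecutive edges that actuate the same factor (illustrative).

    `path` is a list of (factor_id, displacement) edges. Consecutive edges
    acting on the same factor are fused into one action, reducing the
    action cost (the number of distinct manipulation actions).
    """
    merged = []
    for factor, delta in path:
        if merged and merged[-1][0] == factor:
            merged[-1] = (factor, merged[-1][1] + delta)  # extend last action
        else:
            merged.append((factor, delta))
    return merged

# Example: the first two edges on factor 0 collapse into one action.
print(defragment([(0, 1.0), (0, 0.5), (1, -2.0), (0, 1.0)]))
# -> [(0, 1.5), (1, -2.0), (0, 1.0)]
```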
Multi-task learning has achieved great success in a variety of applications in recent years. Although training a single model per task has long promised excellent results, it ignores valuable information that could help us estimate each metric better. Given related learning tasks, multi-task learning can generalize models better. We attempt to enhance the feature mapping of multi-task models by sharing features among related tasks and through inductive transfer learning. We are also interested in learning the relationships among the various tasks, so as to gain more from multi-task learning. In this chapter, we aim to survey existing multi-task models, compare their performance, present methods for evaluating multi-task model performance, and discuss the problems faced during their design and implementation in various domains, as well as the advantages and milestones they have achieved.
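One common way to share features among related tasks is hard parameter sharing: a single trunk learned jointly across tasks, with a lightweight head per task. The PyTorch sketch below is a generic illustration of this pattern, not any specific model from the chapter; the layer sizes are placeholder choices.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hard parameter sharing: one shared trunk, one head per task."""
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.shared = nn.Sequential(            # features shared across tasks
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(             # task-specific output layers
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x):
        h = self.shared(x)
        return [head(h) for head in self.heads]

model = HardSharingMTL(in_dim=16, hidden_dim=64, task_out_dims=[1, 3])
outputs = model(torch.randn(8, 16))  # one output tensor per task
```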
Standard neural networks can approximate general nonlinear operators, represented either explicitly by a composition of mathematical operators, e.g., in an advection-diffusion-reaction partial differential equation, or simply as a black box, e.g., a system-of-systems. The first neural operator was the Deep Operator Network (DeepONet), proposed in 2019 based on rigorous approximation theory. Since then, several other, somewhat less general, neural operators have been published, based, for example, on graph neural networks or the Fourier transform. For black-box systems, training of neural operators is purely data-driven, but if the governing equations are known, they can be incorporated into the loss function during training to develop physics-informed neural operators. Neural operators can be used as surrogates in design problems, uncertainty quantification, autonomous systems, and almost any application requiring real-time inference. Moreover, independently pre-trained DeepONets can be used as components of complex multi-physics systems by coupling them together with relatively light training. Here, we present a review of DeepONet, the Fourier neural operator, and the graph neural operator, as well as appropriate extensions with feature expansions, and highlight their usefulness in diverse applications in computational mechanics, including porous media, fluid mechanics, and solid mechanics.
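For orientation, here is a minimal PyTorch sketch of the (unstacked) DeepONet architecture: a branch network encodes the input function sampled at $m$ fixed sensor locations, a trunk network encodes the query coordinate, and their inner product approximates the operator output $G(u)(y)$. The layer widths and depths are placeholder choices.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet sketch: branch net encodes the input function at m
    fixed sensors, trunk net encodes the query location y, and their inner
    product approximates the operator output G(u)(y)."""
    def __init__(self, m_sensors, coord_dim, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.ReLU(),
                                    nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.ReLU(),
                                   nn.Linear(128, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)          # (batch, p)
        t = self.trunk(y)                   # (batch, p)
        return (b * t).sum(-1, keepdim=True) + self.bias

net = DeepONet(m_sensors=100, coord_dim=1)
out = net(torch.randn(32, 100), torch.rand(32, 1))  # G(u)(y) for 32 pairs
```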
We consider a seller offering a large network of $N$ products over a time horizon of $T$ periods. The seller does not know the parameters of the products' linear demand model and can dynamically adjust product prices to learn the demand model from sales observations. The seller aims to minimize its pseudo-regret, i.e., the expected revenue loss relative to a clairvoyant who knows the underlying demand model. We consider sparse demand relationships between products to characterize various connectivity properties of the product network. In particular, we study three different sparsity frameworks: (1) $L_0$ sparsity, which constrains the number of connections in the network; (2) off-diagonal sparsity, which constrains the magnitude of cross-product price sensitivities; and (3) a new notion of spectral sparsity, which constrains the asymptotic decay of a similarity metric on the network's nodes. We propose a dynamic pricing-and-learning policy that combines optimism in the face of uncertainty with PAC-Bayesian approaches, and show that this policy achieves asymptotically optimal performance with respect to $N$ and $T$. We also show that, under spectral or off-diagonal sparsity, the seller can attain pseudo-regret linear in $N$ even when the network is dense.
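The NumPy sketch below illustrates the basic estimation step such a policy rests on, assuming the linear demand model $d_t = a + B p_t + \text{noise}$: fit $(a, B)$ from price-demand observations by ridge regression, then price greedily against the estimate. The random exploration prices, the ridge penalty, and the greedy step are illustrative simplifications; the paper's policy combines optimism with PAC-Bayesian machinery instead.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 200

# Ground-truth sparse linear demand: d_t = a + B @ p_t + noise
a_true = rng.uniform(5.0, 10.0, N)
B_true = -np.eye(N) + 0.1 * (rng.random((N, N)) < 0.2)  # sparse cross effects

prices, demands = [], []
for t in range(T):
    p = rng.uniform(1.0, 5.0, N)  # exploratory prices (a real policy would
                                  # mix exploration with optimistic pricing)
    d = a_true + B_true @ p + 0.1 * rng.standard_normal(N)
    prices.append(p)
    demands.append(d)

# Ridge estimate of (a, B) from the (price, demand) observations
X = np.hstack([np.ones((T, 1)), np.array(prices)])      # T x (N+1)
Y = np.array(demands)                                   # T x N
theta = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N + 1), X.T @ Y)
a_hat, B_hat = theta[0], theta[1:].T

# Greedy revenue-maximizing price under the estimated model:
# maximize p @ (a_hat + B_hat @ p)  =>  p* = -(B_hat + B_hat.T)^{-1} a_hat
p_star = -np.linalg.solve(B_hat + B_hat.T, a_hat)
print("estimated optimal price vector:", np.round(p_star, 2))
```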
Virtual Adversarial Training (VAT) has shown impressive results among recently developed regularization methods called consistency regularization. VAT utilizes adversarial samples, generated by injecting perturbation in the input space, for training and thereby enhances the generalization ability of a classifier. However, such adversarial samples can be generated only within a very small area around the input data point, which limits the adversarial effectiveness of such samples. To address this problem we propose LVAT (Latent space VAT), which injects perturbation in the latent space instead of the input space. LVAT can generate adversarial samples flexibly, resulting in more adverse effects and thus more effective regularization. The latent space is built by a generative model, and in this paper, we examine two different types of models: a variational autoencoder and a normalizing flow, specifically Glow. We evaluated the performance of our method in both supervised and semi-supervised learning scenarios for an image classification task using the SVHN and CIFAR-10 datasets. In our evaluation, we found that our method outperforms VAT and other state-of-the-art methods.
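A condensed PyTorch sketch of the latent-space perturbation follows, using one power-iteration step to approximate the most adverse latent direction (as in VAT, but applied to the latent code and decoded back to input space). The function signature and hyperparameters are illustrative assumptions; the paper's full procedure may differ in details.

```python
import torch
import torch.nn.functional as F

def lvat_loss(classifier, encoder, decoder, x, xi=1e-2, eps=1.0):
    """Latent-space virtual adversarial loss (simplified sketch)."""
    # Clean predictions and the latent code, both held fixed.
    with torch.no_grad():
        z = encoder(x)
        p_clean = F.softmax(classifier(x), dim=1)

    # One power-iteration step: find the latent direction that most
    # increases the divergence between clean and perturbed predictions.
    d = torch.randn_like(z)
    d = xi * d / d.norm(dim=1, keepdim=True)
    d.requires_grad_(True)
    log_q = F.log_softmax(classifier(decoder(z + d)), dim=1)
    kl = F.kl_div(log_q, p_clean, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]

    # Decode the adversarial latent code and penalize the divergence.
    r_adv = eps * grad / grad.norm(dim=1, keepdim=True)
    log_q_adv = F.log_softmax(classifier(decoder(z + r_adv)), dim=1)
    return F.kl_div(log_q_adv, p_clean, reduction="batchmean")
```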
The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model $G : \mathbb{R}^k \to \mathbb{R}^n$. Our main theorem is that, if $G$ is $L$-Lipschitz, then roughly $O(k \log L)$ random Gaussian measurements suffice for an $\ell_2/\ell_2$ recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.
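A minimal PyTorch sketch of the recovery procedure: search the latent space for the code whose image best explains the measurements, by gradient descent on $\|AG(z) - y\|^2$. The toy linear-map "generator" below is only a stand-in so the snippet runs end to end; in practice $G$ would be a trained VAE or GAN decoder.

```python
import torch

def recover(G, A, y, k, steps=1000, lr=1e-2):
    """Estimate x ~ G(z*) from measurements y = A x + noise (sketch).

    Rather than assuming sparsity, search the latent space of a generative
    model G for the code whose image best explains the measurements.
    """
    z = torch.zeros(k, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((A @ G(z) - y) ** 2).sum()   # measurement misfit
        loss.backward()
        opt.step()
    return G(z.detach())

# Toy usage with a fixed random "generator" (a tanh of a linear map).
torch.manual_seed(0)
k, n, m = 5, 50, 20
W = torch.randn(n, k)
G = lambda z: torch.tanh(W @ z)              # stand-in Lipschitz generator
A = torch.randn(m, n) / m ** 0.5             # random Gaussian measurements
x_true = G(torch.randn(k))
y = A @ x_true + 0.01 * torch.randn(m)
x_hat = recover(G, A, y, k)
print("relative error:", ((x_hat - x_true).norm() / x_true.norm()).item())
```

Note that the latent objective is nonconvex, so gradient descent from several random restarts is often used in practice.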